As deep neural networks become increasingly common in many fields of science and engineering, modeling and estimating their uncertainty has become of major importance. Various approaches have been studied, including Bayesian neural networks, ensembles, and deterministic approximations. Despite the growing literature on uncertainty quantification in deep learning, the quality of uncertainty estimates remains an open question. In this work, we assess the performance of several algorithms on sampling and regression tasks by evaluating the quality of the confidence intervals they produce and how well the generated samples represent the unknown target distribution. To this end, several sampling and regression tasks are considered, and the selected algorithms are compared in terms of coverage probabilities, kernelized Stein discrepancy, and maximum mean discrepancy.
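The maximum mean discrepancy used for this comparison admits a simple plug-in estimate. The sketch below is a minimal illustration with a Gaussian kernel and a fixed bandwidth (both assumptions; the paper does not specify its kernel choices here), not the authors' evaluation code:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared maximum mean discrepancy."""
    k_xx = rbf_kernel(x, x, bandwidth)
    k_yy = rbf_kernel(y, y, bandwidth)
    k_xy = rbf_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

rng = np.random.default_rng(0)
same = mmd_squared(rng.normal(0, 1, (200, 1)), rng.normal(0, 1, (200, 1)))
diff = mmd_squared(rng.normal(0, 1, (200, 1)), rng.normal(3, 1, (200, 1)))
print(same, diff)  # same-distribution MMD is much smaller
```

When the two sample sets come from the same distribution the estimate is close to zero, so larger values flag samplers whose outputs drift away from the target.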
Variable importance measures are the main tools used to analyze the black-box mechanism of random forests. Although the Mean Decrease Accuracy (MDA) is widely accepted as the most efficient variable importance measure for random forests, little is known about its statistical properties. In fact, the exact MDA definition varies across the main random forest software packages. In this paper, we aim to rigorously analyze the behavior of the main MDA implementations. We therefore mathematically formalize the various implemented MDA algorithms, and then establish their limits as the sample size increases. In particular, we decompose these limits into three components: the first is related to Sobol indices, a well-defined measure of a covariate's contribution to the response variance that is widely used in the sensitivity analysis field, as opposed to the third term, whose value increases with the dependence between covariates. We thus theoretically show that the MDA does not target the right quantity when covariates are dependent, a fact that had previously been observed experimentally. To address this issue, we define a new importance measure for random forests, the Sobol-MDA, which fixes the flaws of the original MDA. We prove the consistency of the Sobol-MDA and show that it empirically outperforms its competitors on simulated and real data. An open-source implementation in R and C++ is available online.
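The permutation-based MDA can be sketched without any forest machinery. In the toy example below, the "fitted model" is a fixed linear function (an assumption standing in for a trained forest), and importance is measured as the rise in mean squared error after shuffling one covariate:

```python
import numpy as np

def mean_decrease_accuracy(model, X, y, rng):
    """Permutation importance: increase in MSE when one covariate is shuffled."""
    base_mse = np.mean((y - model(X)) ** 2)
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        importances.append(np.mean((y - model(X_perm)) ** 2) - base_mse)
    return np.array(importances)

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y = 2.0 * X[:, 0] + X[:, 1]                # the third covariate is irrelevant
model = lambda X: 2.0 * X[:, 0] + X[:, 1]  # stand-in for a fitted forest
mda = mean_decrease_accuracy(model, X, y, rng)
print(mda)  # roughly [8, 2, 0]
```

With independent covariates as here, the limits are well behaved; the paper's point is that under dependent covariates this quantity drifts away from the Sobol index one would actually want.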
Existing analyses of neural network training often operate under the unrealistic assumption of an extremely small learning rate. This lies in stark contrast to practical wisdom and empirical studies, such as the work of J. Cohen et al. (ICLR 2021), which exhibit startling new phenomena (the "edge of stability" or "unstable convergence") and potential benefits for generalization in the large learning rate regime. Despite a flurry of recent works on this topic, however, the latter effect is still poorly understood. In this paper, we take a step towards understanding genuinely non-convex training dynamics with large learning rates by performing a detailed analysis of gradient descent for simplified models of two-layer neural networks. For these models, we provably establish the edge of stability phenomenon and discover a sharp phase transition for the step size below which the neural network fails to learn "threshold-like" neurons (i.e., neurons with a non-zero first-layer bias). This elucidates one possible mechanism by which the edge of stability can in fact lead to better generalization, as threshold neurons are basic building blocks with useful inductive bias for many tasks.
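The classical stability threshold underlying the "edge of stability" is easiest to see on a one-dimensional quadratic, where gradient descent contracts exactly when the step size stays below 2/sharpness. This is only a minimal illustration of the step-size threshold, not the paper's two-layer model:

```python
def gradient_descent(step_size, sharpness=10.0, x0=1.0, steps=100):
    """Run GD on f(x) = sharpness * x^2 / 2; the gradient is sharpness * x."""
    x = x0
    for _ in range(steps):
        x = x - step_size * sharpness * x
    return abs(x)

stable = gradient_descent(step_size=0.19)    # below 2 / sharpness = 0.2
unstable = gradient_descent(step_size=0.21)  # above the threshold
print(stable, unstable)
```

Just below the threshold the iterates oscillate but shrink; just above, they blow up. The non-convex dynamics analyzed in the paper are what allow training to hover at this boundary rather than diverge.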
We introduce the XPER (eXplainable PERformance) methodology to measure the specific contribution of the input features to the predictive or economic performance of a model. Our methodology offers several advantages. First, it is both model-agnostic and performance metric-agnostic. Second, XPER is theoretically founded as it is based on Shapley values. Third, the interpretation of the benchmark, which is inherent in any Shapley value decomposition, is meaningful in our context. Fourth, XPER is not plagued by model specification error, as it does not require re-estimating the model. Fifth, it can be implemented either at the model level or at the individual level. In an application based on auto loans, we find that performance can be explained by a surprisingly small number of features. XPER decompositions are rather stable across metrics, yet some feature contributions switch sign across metrics. Our analysis also shows that explaining model forecasts and model performance are two distinct tasks.
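For a handful of features, a Shapley decomposition of a performance metric can be computed exactly by brute force. The coalition "value" below is a hypothetical toy accuracy function, not XPER's estimator; the sketch only illustrates the efficiency property, i.e. that the attributions sum to the full-model performance minus the benchmark:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n_features):
    """Exact Shapley attribution of value(coalition) over all feature subsets."""
    features = range(n_features)
    phi = [0.0] * n_features
    for j in features:
        others = [f for f in features if f != j]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                weight = (factorial(size) * factorial(n_features - size - 1)
                          / factorial(n_features))
                phi[j] += weight * (value(frozenset(subset) | {j})
                                    - value(frozenset(subset)))
    return phi

# Toy coalition value: per-feature accuracy gains plus one interaction term.
gains = {0: 0.10, 1: 0.05, 2: 0.01}
def accuracy(coalition):
    base = 0.5 + sum(gains[f] for f in coalition)
    return base + (0.04 if {0, 1} <= coalition else 0.0)

phi = shapley_values(accuracy, 3)
print(phi)  # sums to accuracy(full) - accuracy(empty) = 0.20
```

The interaction bonus is split evenly between features 0 and 1, which is the behavior that makes Shapley-based decompositions attractive for attributing performance.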
We introduce a parametric view of non-local two-step denoisers, for which BM3D is a major representative, where quadratic risk minimization is leveraged for unsupervised optimization. Within this paradigm, we propose to extend the underlying mathematical parametric formulation by iteration. This generalization can be expected to further improve the denoising performance, somewhat curbed by the impracticality of repeating the second stage for all two-step denoisers. The resulting formulation involves estimating an even larger number of parameters in an unsupervised manner, which is all the more challenging. Focusing on the parameterized form of NL-Ridge, the simplest but also most efficient non-local two-step denoiser, we propose a progressive scheme to approximate the parameters minimizing the risk. In the end, the denoised images are made up of iterative linear combinations of patches. Experiments on artificially noisy images but also on real-world noisy images demonstrate that our method compares favorably with the very best unsupervised denoisers such as WNNM, outperforming the recent deep-learning-based approaches, while being much faster.
The French National Institute of Geographical and Forest Information (IGN) has the mission to document and measure land-cover on French territory and provides referential geographical datasets, including high-resolution aerial images and topographic maps. The monitoring of land-cover plays a crucial role in land management and planning initiatives, which can have significant socio-economic and environmental impact. Together with remote sensing technologies, artificial intelligence (AI) promises to become a powerful tool in determining land-cover and its evolution. IGN is currently exploring the potential of AI in the production of high-resolution land cover maps. Notably, deep learning methods are employed to obtain a semantic segmentation of aerial images. However, territories as large as France imply heterogeneous contexts: variations in landscapes and image acquisition make it challenging to provide uniform, reliable and accurate results across all of France. The FLAIR-one dataset presented here is part of the dataset currently used at IGN to establish the French national reference land cover map "Occupation du sol \`a grande \'echelle" (OCS-GE).
Reflection high-energy electron diffraction (RHEED) is a powerful tool in molecular beam epitaxy (MBE), but RHEED images are often difficult to interpret, requiring experienced operators. We present an approach for automated surveillance of GaAs substrate deoxidation in MBE reactors using deep-learning-based RHEED image-sequence classification. Our approach consists of an unsupervised auto-encoder (AE) for feature extraction, combined with a supervised convolutional classifier network. We demonstrate that our lightweight network model can accurately identify the exact deoxidation moment. Furthermore, we show that the approach is very robust and allows accurate deoxidation detection over a period of months without requiring re-training. The main advantage of the approach is that it can be applied to raw RHEED images without requiring further information such as the rotation angle, temperature, etc.
Automatic SQL generation has been an active research area, aiming to simplify access to databases by allowing users to write natural language expressing a given intent instead of writing SQL. Current SOTA methods for semantic parsing depend on LLMs to achieve high predictive accuracy on benchmark datasets. This reduces their applicability, since LLMs require expensive GPUs. Furthermore, SOTA methods are ungrounded and thus not guaranteed to always generate valid SQL. Here we propose T5QL, a new SQL generation method that improves performance on benchmark datasets when using smaller LMs, namely T5-Base, compared to SOTA methods. Additionally, T5QL is guaranteed to always output valid SQL, using a context-free grammar to constrain SQL generation. Finally, we show that dividing semantic parsing into two tasks, candidate SQL generation and candidate re-ranking, is a promising research avenue that can reduce the need for large LMs.
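The grounding idea can be sketched in miniature: at each decoding step, the model's candidate tokens are filtered to those a grammar allows, so only valid SQL can ever be produced. The grammar, vocabulary, and scores below are toy assumptions, far simpler than T5QL's actual context-free grammar:

```python
# Toy grammar for "SELECT <col> FROM <table>" queries.
COLUMNS = {"id", "name"}
TABLES = {"users", "orders"}

def allowed_next(prefix):
    """Return the set of tokens the grammar permits after this prefix."""
    n = len(prefix)
    if n == 0:
        return {"SELECT"}
    if n == 1:
        return set(COLUMNS)
    if n == 2:
        return {"FROM"}
    if n == 3:
        return set(TABLES)
    return set()  # sentence complete

def constrained_decode(score):
    """Greedy decoding restricted to grammar-legal tokens at every step."""
    out = []
    while allowed_next(tuple(out)):
        out.append(max(allowed_next(tuple(out)), key=score))
    return " ".join(out)

# Stand-in LM scores; an ungrounded decoder might happily emit "DELETE".
scores = {"SELECT": 0.1, "DELETE": 9.9, "name": 0.8, "id": 0.2,
          "FROM": 0.5, "users": 0.7, "orders": 0.3}
query = constrained_decode(lambda tok: scores.get(tok, 0.0))
print(query)  # "SELECT name FROM users"
```

Even though "DELETE" carries the highest raw score, it is never reachable: the grammar makes validity a structural guarantee rather than a property the LM must learn.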
In recent years, a growing body of work has studied how to learn machine learning models under fairness constraints, often expressed with respect to some sensitive attributes. In this work, we consider the setting in which an adversary has black-box access to a target model and show that the adversary can exploit information about the model's fairness to improve his reconstruction of the sensitive attributes of the training data. More precisely, we propose a generic reconstruction correction method that takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes to the adversary's guess. The proposed method is agnostic to the type of target model, the fairness-aware learning method, and the auxiliary knowledge of the adversary. To assess the applicability of our approach, we conduct a thorough experimental evaluation on two state-of-the-art fair learning methods, using four different fairness metrics with a wide range of tolerances and three datasets of diverse sizes and sensitive attributes. The experimental results demonstrate the effectiveness of the proposed method at improving the reconstruction of the sensitive attributes of the training set.
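A heavily simplified instance of such a correction: suppose the adversary holds confidence scores for a binary sensitive attribute, and the leaked constraint (a hypothetical one, standing in for the paper's general user-defined constraints) says exactly k training records have attribute 1. Flipping only the least-confident guesses satisfies the constraint with minimal changes:

```python
import numpy as np

def correct_reconstruction(scores, k):
    """Make the guess satisfy 'exactly k positives' with minimal changes.

    scores: adversary's confidence P(attribute = 1) for each record.
    Setting the k highest-scoring records to 1 flips as few of the
    initial thresholded guesses as possible.
    """
    guess = np.zeros(len(scores), dtype=int)
    guess[np.argsort(scores)[::-1][:k]] = 1
    return guess

scores = np.array([0.9, 0.6, 0.55, 0.4, 0.1])
initial = (scores >= 0.5).astype(int)            # raw guess: three ones
corrected = correct_reconstruction(scores, k=2)  # leaked constraint: two ones
print(initial.tolist(), corrected.tolist())
```

The single flip lands on the record the adversary was least sure about, which is why auxiliary constraints such as fairness information translate directly into reconstruction gains.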
To mitigate the impact of undesirable biases in models, several approaches propose to pre-process the input dataset to reduce the risks of discrimination by preventing the inference of sensitive attributes. Unfortunately, most of these pre-processing methods generate a new distribution that differs substantially from the original one, and thus often produce unrealistic data. As a side effect, this new data distribution means that existing models need to be re-trained to make accurate predictions. To address this issue, we propose a novel pre-processing method in which the data distribution conditioned on the protected group is transported toward a chosen target distribution, with an additional privacy constraint whose objective is to prevent the inference of sensitive attributes. More precisely, we build on recent work on the Wasserstein GAN and AttGAN frameworks to achieve the optimal transport of data points, along with a discriminator enforcing protection against the inference of sensitive attributes. Our proposed approach preserves the interpretability of the data and can be used without defining the sensitive groups. In addition, our approach can be specialized to model existing state-of-the-art methods, thus offering a unifying view of these approaches. Finally, experiments on real and synthetic datasets show that our approach is able to hide sensitive attributes while limiting the distortion of the data and improving fairness in subsequent data analysis tasks.
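In one dimension, optimal transport toward a target distribution reduces to quantile matching, which gives a cheap intuition for the transformation the GAN-based transport learns in higher dimensions. The sketch below is a simplification under that one-feature assumption, not the paper's method:

```python
import numpy as np

def transport_1d(source, target):
    """1-D optimal transport: map each source value onto the target quantile
    with the same rank, preserving the ordering of the source points."""
    ranks = np.argsort(np.argsort(source))       # rank of each source point
    quantiles = (ranks + 0.5) / len(source)
    return np.quantile(target, quantiles)

rng = np.random.default_rng(2)
group_a = rng.normal(0.0, 1.0, 1000)   # protected group's feature
group_b = rng.normal(2.0, 0.5, 1000)   # chosen target distribution
moved = transport_1d(group_a, group_b)
print(moved.mean(), group_b.mean())    # distributions now closely match
```

Because the mapping is monotone, relative comparisons within the group survive the transformation, which is one sense in which such transport-based pre-processing limits distortion and keeps the data interpretable.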